import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(4, 5), columns=['A', 'B', 'C', 'D', 'E'])
DataFrame data preview:
          A         B         C         D         E
0  0.673092  0.230338 -0.171681  0.312303 -0.184813
1 -0.504482 -0.344286 -0.050845 -0.811277 -0.298181
2  0.542788  0.207708  0.651379 -0.656214  0.507595
3 -0.249410  0.131549 -2.198480 -0.437407  1.628228
Sum the values across the columns of each row and append the result as a new column:
df['col_sum'] = df.apply(lambda x: x.sum(), axis=1)
Sum the values down each column and append the result as a new row:
df.loc['row_sum'] = df.apply(lambda x: x.sum())
Final data results:
                A         B         C         D         E   col_sum
0        0.673092  0.230338 -0.171681  0.312303 -0.184813  0.859238
1       -0.504482 -0.344286 -0.050845 -0.811277 -0.298181 -2.009071
2        0.542788  0.207708  0.651379 -0.656214  0.507595  1.253256
3       -0.249410  0.131549 -2.198480 -0.437407  1.628228 -1.125520
row_sum  0.4619 ...
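As an aside, the same two totals can be computed without apply(); a minimal sketch using DataFrame.sum(), which is not the code from the original example:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(4, 5), columns=['A', 'B', 'C', 'D', 'E'])
df['col_sum'] = df.sum(axis=1)   # total across the columns of each row, added as a new column
df.loc['row_sum'] = df.sum()     # total down each column (col_sum included), added as a new row
print(df)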
This article introduces how to exclude specific rows from a pandas DataFrame in Python, with detailed example code. It should have some reference value for understanding and learning; readers who need it can follow along below.
2. About adding a column to a pandas.DataFrame in Python
Spark DataFrames are different in the sense that they're an immutable data structure. Therefore things like:
# to create a new column "three"
df['three'] = df['one'] * df['one']
can't exist, just because this kind of assignment goes against the principles of Spark. Another example would be trying to access a single element within a DataFrame by index. Don't forget that you're ...
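For comparison, the Spark way is to derive a new DataFrame rather than mutate the existing one; a minimal PySpark sketch (the session setup and sample data are assumptions, not from the original article):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame([(1.0,), (2.0,), (3.0,)], ['one'])

# withColumn returns a brand-new DataFrame; sdf itself is left untouched
sdf2 = sdf.withColumn('three', sdf['one'] * sdf['one'])
sdf2.show()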
In [3]: df = pd.DataFrame([['GD', 'GX', 'FJ'], ['SD', 'SX', 'BJ'], ['HN', 'HB', 'AH'],
   ...:                    ['HEN', 'HEN', 'HLJ'], ['SH', 'TJ', 'CQ']],
   ...:                   columns=['p1', 'p2', 'p3'])

In [4]: df
Out[4]:
    p1   p2   p3
0   GD   GX   FJ
1   SD   SX   BJ
2   HN   HB   AH
3  HEN  HEN  HLJ
4   SH   TJ   CQ
If you only want the two rows whose p1 is GD or HN, you can do this:
In [8]: df[df.p1.isin(['GD', 'HN'])]
Out[8]:
   p1  p2  p3
0  GD  GX  FJ
2  HN  HB  AH
However, if we want everything except those two rows, we need a small workaround.
The idea is to first extract p1 and convert it to a list, remove the unwanted values from the list, and then use isin() with what is left:
In [9]: ex_list = list(df.p1)

In [10]: ex_list.remove('GD')

In [11]: ex_list.remove('HN')

In [12]: ex_list
Out[12]: ['SD', 'HEN', 'SH']
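The last step the text describes (filtering with the surviving values) does not appear in the truncated session above; it, and a shorter equivalent using the ~ negation operator, would look roughly like this:

df[df.p1.isin(ex_list)]        # keep only the rows whose p1 survived the removal
df[~df.p1.isin(['GD', 'HN'])]  # equivalent one-liner: negate the membership test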
Pandas vs. Spark: working style
- Pandas: a single-machine tool with no built-in parallelism; it does not support Hadoop, so it hits bottlenecks on large volumes of data.
- Spark: a distributed parallel computing framework with built-in parallelism; data and operations are automatically distributed across the cluster nodes, and distributed data is processed in memory. It supports Hadoop and can handle large amounts of data.
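If a SparkSession is available, moving a table between the two worlds is a one-liner in each direction; a small sketch under that assumption (the sample data is illustrative):

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pdf = pd.DataFrame({'p1': ['GD', 'SD', 'HN'], 'p2': ['GX', 'SX', 'HB']})
sdf = spark.createDataFrame(pdf)   # pandas -> Spark: the data becomes distributed
pdf2 = sdf.toPandas()              # Spark -> pandas: results are collected to the driver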
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

a = pd.Series(np.random.randn(1000), index=pd.date_range('20100101', periods=1000))
b = a.cumsum()
b.plot()
plt.show()  # Be sure to call plt.show() at the end, or the figure will not appear.

You can also generate several time series in one figure with the following code:

a = pd.DataFrame(np.random.randn(1000, 4), index=pd.date_range('20100101', periods=1000), columns=list('ABCD'))
b = a.cumsum()
b.plot()
plt.show()

11. Import an ...
Previously written: the pandas DataFrame applymap() function, and pandas arrays (pandas Series) (5): custom functions with the apply() method. The applymap() function of the pandas DataFrame and the apply() method of the ...
[Python] Importing a pandas DataFrame into SQLite3
Use the pandas.io.sql connector to write to SQLite:
import sqlite3 as lite
from pandas.io import sql
import pandas as pd
Depending on the value of if_exists, data can be written to SQLite in three modes. The following parameters are available:
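The if_exists parameter of the pandas writer accepts 'fail', 'replace', and 'append'. A minimal sketch using the DataFrame.to_sql() entry point (the table and file names are made up for illustration):

import sqlite3 as lite
import pandas as pd

con = lite.connect('test.db')                      # illustrative database file
df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})

df.to_sql('demo', con, if_exists='fail')     # raise an error if the table already exists
df.to_sql('demo', con, if_exists='replace')  # drop the existing table and recreate it
df.to_sql('demo', con, if_exists='append')   # insert the rows into the existing table
con.close()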
The previous article, Pandas DataFrame apply() function (1), showed how to transform a DataFrame with the apply() function to get a new DataFrame. This article describes another use of the DataFrame apply() function to get a new ...
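For context, a minimal sketch of what transforming a DataFrame with apply() can look like; this is illustrative and not the example from that article:

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# Column-wise (default axis=0): each column is passed to the function as a Series
normalized = df.apply(lambda col: (col - col.mean()) / col.std())

# Row-wise (axis=1): each row is passed as a Series, yielding a new Series of totals
totals = df.apply(lambda row: row['a'] + row['b'], axis=1)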
How do I delete empty strings from a list?
Easiest way: new_list = [x for x in li if x != '']
Today is May 1st.
This section covers the basic operations of pandas, building on the two data structures introduced previously.
The DataFrame a is shown below:
       a  b  c
one    4  1  1
two    6  2  0
three  6  1  6
First, view the data (the methods for viewing the object also apply to ...
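The usual inspection calls look roughly like the sketch below, using the same data as above (assembled here only for illustration):

import pandas as pd

a = pd.DataFrame({'a': [4, 6, 6], 'b': [1, 2, 1], 'c': [1, 0, 6]},
                 index=['one', 'two', 'three'])

a.head()      # first rows (5 by default)
a.tail(2)     # last two rows
a.describe()  # summary statistics for each numeric column
a.index       # row labels: one, two, three
a.columns     # column labels: a, b, c
a.values      # the underlying NumPy array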
dtype : dtype, default None
    Data type to force. Only a single dtype is allowed. If None, infer.
copy : boolean, default False
    Copy data from inputs. Only affects DataFrame / 2d ndarray input.
See also
DataFrame.from_records : constructor from tuples, also record arrays
DataFrame.from_dict : from dicts of Series, arrays, or dicts
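A short sketch of how those constructor arguments and the two classmethods listed above are used; the sample values are made up:

import numpy as np
import pandas as pd

data = np.array([[1, 2], [3, 4]])

# dtype forces a single dtype for the whole frame; copy=True detaches it from `data`
df = pd.DataFrame(data, columns=['x', 'y'], dtype=float, copy=True)

# The classmethods from the "See also" section
df1 = pd.DataFrame.from_records([(1, 'a'), (2, 'b')], columns=['id', 'name'])
df2 = pd.DataFrame.from_dict({'id': [1, 2], 'name': ['a', 'b']})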
This section describes the basic methods for working with data in Series and DataFrame objects.
Reindexing
An important method on pandas objects is reindex, which creates a new object whose data conforms to a new index.
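A minimal sketch of reindex on a Series (the values are illustrative):

import pandas as pd

s = pd.Series([4.5, 7.2, -5.3, 3.6], index=['d', 'b', 'a', 'c'])

# reindex returns a NEW object whose data conforms to the new index;
# labels that did not exist before are filled with NaN
s2 = s.reindex(['a', 'b', 'c', 'd', 'e'])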
Let's create a DataFrame by hand.
import numpy as np
import pandas as pd

# a small frame with three named columns a, b, c
df = pd.DataFrame(np.arange(6).reshape(2, 3), columns=list('abc'))
df then looks like this. So how do you choose among the three ways of picking data? First, when each column already has a column name, df['a'] selects a whole column ...
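The three access styles the sentence starts to enumerate are usually plain column selection, label-based .loc, and position-based .iloc; a sketch under that assumption, rebuilding the same df:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(2, 3), columns=list('abc'))

df['a']           # 1) a whole column, selected by its column name
df.loc[0, 'a']    # 2) label-based: row label 0, column 'a'
df.iloc[0, 0]     # 3) position-based: first row, first column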
1. Create a DataFrame from a dictionary
>>> import pandas as pd
>>> dict1 = {'col1': [1, 2, 5, 7], 'col2': ['a', 'b', 'C', 'D']}
>>> df = pd.DataFrame(dict1)
>>> df
   col1 col2
0     1    a
1     2    b
2     5    C
3     7    D
2. Create a DataFrame from multiple lists (convert the lists to a dictionary, then convert the diction...
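Item 2 is cut off above, but the approach it names (turn the lists into a dictionary, then into a DataFrame) would look roughly like this sketch:

>>> import pandas as pd
>>> names = ['col1', 'col2']
>>> lists = [[1, 2, 5, 7], ['a', 'b', 'C', 'D']]
>>> df = pd.DataFrame(dict(zip(names, lists)))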
1. Create a DataFrame from a dictionary
>>> import pandas
>>> dict_a = {'user_id': ['Webbang', 'Webbang', 'Webbang'],
...           'book_id': ['3713327', '4074636', '26873486'],
...           'rating': ['4', '4', '4'],
...           'mark_date': ['2017-03-07', '2017-03-07', '2017-03-07']}
>>> df = pandas.DataFrame(dict_a)  # Create a ...